Injecting Domain Knowledge from Empirical Interatomic Potentials to Neural Networks for Predicting Material Properties

Neural Information Processing Systems

This limited range of interaction gives rise to the concept of an atomic environment. A consequence of this locality is that an infinite system can be modeled exactly using a finite periodic cell, so long as a sufficient number of periodic images surrounding it are explicitly accounted for. The dataset consists of configurations for elemental Ag and Au, as well as the AgAu binary alloy.
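The locality argument above can be sketched in code: for a cubic cell, it suffices to replicate enough shells of periodic images to cover the interaction cutoff, after which every neighbor within the cutoff is accounted for. The `neighbor_count` helper below and its cubic-cell assumption are illustrative, not taken from the paper.

```python
import numpy as np

def neighbor_count(positions, cell_length, cutoff):
    """Count neighbors within `cutoff` of each atom, including periodic images.

    Hypothetical helper for a cubic cell of side `cell_length`; enough image
    shells are replicated so that every interaction within the cutoff radius
    is captured, which is what makes the finite periodic cell exact.
    """
    n_shells = int(np.ceil(cutoff / cell_length))  # image shells needed
    shifts = [np.array([i, j, k]) * cell_length
              for i in range(-n_shells, n_shells + 1)
              for j in range(-n_shells, n_shells + 1)
              for k in range(-n_shells, n_shells + 1)]
    counts = np.zeros(len(positions), dtype=int)
    for a, ra in enumerate(positions):
        for shift in shifts:
            for b, rb in enumerate(positions):
                if a == b and not shift.any():
                    continue  # skip the atom itself in the home cell
                if np.linalg.norm(ra - (rb + shift)) < cutoff:
                    counts[a] += 1
    return counts
```

For a simple cubic crystal with one atom per unit cell and a cutoff just past the lattice spacing, each atom sees exactly its six nearest periodic images, as expected.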




The dataset is generated using an active learning strategy. An ensemble of models is then trained on the data, and new configurations are selected to be further labeled by DFT based on the uncertainty obtained from the ensemble. This process is iterated multiple times. We show the selected EIPs used in our experiments and their accuracy in Table 2 for reference. Table 2: EIPs used in experiments and their accuracy.
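The selection step of this active learning loop can be sketched as follows; `select_by_uncertainty` is a hypothetical helper that ranks pool configurations by the ensemble's predictive disagreement.

```python
import numpy as np

def select_by_uncertainty(ensemble_predictions, k):
    """Pick the k pool configurations with the largest ensemble disagreement.

    `ensemble_predictions` is an (n_models, n_configs) array of predicted
    energies; the per-configuration standard deviation across models serves
    as the uncertainty used to decide which configurations to label with DFT.
    """
    std = ensemble_predictions.std(axis=0)  # disagreement per configuration
    return np.argsort(std)[-k:][::-1]       # indices, most uncertain first
```

In each active learning iteration, the chosen indices would be labeled by DFT, added to the training set, and the ensemble retrained before the next round of selection.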



Injecting Domain Knowledge from Empirical Interatomic Potentials to Neural Networks for Predicting Material Properties

Zeren Shui, Daniel S. Karls, Mingjian Wen, Ilia A. Nikiforov, Ellad B. Tadmor, George Karypis

arXiv.org Artificial Intelligence

For decades, atomistic modeling has played a crucial role in predicting the behavior of materials in numerous fields ranging from nanotechnology to drug discovery. The most accurate methods in this domain are rooted in first-principles quantum mechanical calculations such as density functional theory (DFT). Because these methods have remained computationally prohibitive, practitioners have traditionally focused on defining physically motivated closed-form expressions known as empirical interatomic potentials (EIPs) that approximately model the interactions between atoms in materials. In recent years, neural network (NN)-based potentials trained on quantum mechanical (DFT-labeled) data have emerged as a more accurate alternative to conventional EIPs. However, the generalizability of these models relies heavily on the amount of labeled training data, which is often still insufficient to generate models suitable for general-purpose applications. In this paper, we propose two generic strategies that take advantage of unlabeled training instances to inject domain knowledge from conventional EIPs to NNs in order to increase their generalizability. The first strategy, based on weakly supervised learning, trains an auxiliary classifier on EIPs and selects the best-performing EIP to generate energies to supplement the ground-truth DFT energies in training the NN. The second strategy, based on transfer learning, first pretrains the NN on a large set of easily obtainable EIP energies, and then fine-tunes it on ground-truth DFT energies. Experimental results on three benchmark datasets demonstrate that the first strategy improves baseline NN performance by 5% to 51% while the second improves baseline performance by up to 55%. Combining them further boosts performance.
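A minimal sketch of the second (transfer learning) strategy, using a toy linear model in place of the paper's neural network: pretrain on plentiful, cheap EIP-style labels, then warm-start fine-tuning on a small DFT-labeled subset. All names and data here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y, w_init=None, lr=0.1, steps=200):
    """Gradient-descent least squares; `w_init` enables warm-starting (transfer)."""
    w = np.zeros(X.shape[1]) if w_init is None else w_init.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Toy features and a "true" DFT target; the EIP target is a cheap, biased proxy.
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y_dft = X @ w_true
y_eip = X @ (w_true + 0.3)  # EIP approximates DFT with a systematic error

# Stage 1: pretrain on plentiful EIP energies.
w_pre = fit_linear(X, y_eip)

# Stage 2: fine-tune on a small DFT-labeled subset, warm-started from w_pre.
few = slice(0, 20)
w_ft = fit_linear(X[few], y_dft[few], w_init=w_pre)
```

The fine-tuned weights end up closer to the DFT ground truth than the EIP-pretrained weights alone, mirroring the intuition that EIP pretraining provides a useful but biased starting point that a small amount of DFT data can correct.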